p.36 The argument that an intentional event can't cause a physical
event is exactly like an argument that the boiler can't have blown
up because the gas in it was too hot.
Well, he is headed in the same direction.
I incline to the view that intensional entities in the theory of
persons are just ordinary theoretical entities. Their relation
to neurophysiology is like that of chemistry to physics, or of
neurophysiology itself to physics and chemistry. Mental concepts are
needed, because we observe people on that level, and because
laws of mental life can be formulated and used more conveniently
than the underlying neurophysiology. The only difference has
to do with an observer's discussion of his own mental life.
When he discusses other people's mental lives or those of machines,
he accepts the fact that he must cover difficulties of detailed
observation with tenuous theory. We know there is plenty going
on in other people's brains. Fortunately, our own brains give
us good hints, but almost any good theory of mental life could
be confirmed by smart enough robots with quite different mental
lives.
p.129 Philosophers' rules: "Tamper not with ordinary language".
"Avoid mechanism". These rules seem to be anti-scientific, but
naturally many attempts that involve violating them will lead to
inappropriate terminology and wrong mechanisms that will have
to be discarded. Nevertheless, it is just as necessary to
transcend ordinary language and discover mechanism in philosophy
as it is in physics.
p.145 Television cameras and photographic film divide up the spectrum
in approximately the same way as does the human eye. This took some
trouble to achieve. Anyway, our colors are not completely subjective.
p.171 The statement that acts of "willing" don't exist is plausible but
merely contingent. Consider a theory that interposes an act of
"willing" to move the arm before actually moving the arm. What happens
in between is that the amount of "will-power" expended is compared
with the "lethargy". If the "will-power" is insufficient, the action
doesn't take place. In order to make a verifiable theory, we will
suppose that certain actions increase the amount of "will-power"
available by definite amounts. Suppose that certain acts require
definite amounts of "will-power" and use up that much, so that it
has to be restored. A theory of this kind can be experimentally
checked, because the "will-power" would be interchangeably available
for different tasks, and this would permit "weighing" amounts of
"will-power" just as masses can be compared in weight by comparing
their ability to tip see-saws.
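A minimal sketch of how such a theory might be programmed, with
illustrative names and numbers (nothing here is claimed as real
psychology):

    # Toy model of the hypothetical "will-power" theory above.
    # The names, quantities and thresholds are illustrative assumptions.
    class Agent:
        def __init__(self, will_power, lethargy):
            self.will_power = will_power  # replenishable stock
            self.lethargy = lethargy      # resistance an act must overcome

        def attempt(self, cost):
            # The will-power expended is compared with the lethargy; if
            # it is insufficient, the action doesn't take place.  A
            # successful act uses up its definite cost.
            if cost > self.lethargy and self.will_power >= cost:
                self.will_power -= cost
                return True
            return False

        def restore(self, amount):
            # Certain actions increase the available will-power by
            # definite amounts.
            self.will_power += amount

Since the same stock is interchangeably available for different tasks,
the costs of two acts can be compared by seeing which sequences of acts
exhaust an agent's stock, which is the "weighing" mentioned above.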
The objection might be made that this way lies madness, because
even more processes could be interpolated between the will and the
act. Indeed they can, and our only protection against them is that
there is a lack of evidence in their favor. As it is, however, Dennett's
philosophical arguments could be overthrown by a discovery in
experimental psychology.
******
Suppose we try to compare Dennett's book with the needs of AI for
a theory of mental processes to build into robots. Almost all
that Dennett says seems true, but it isn't in a form that would
be useful for AI. AI seems to want a different organization:
1. We need actual formalisms, e.g. predicate logic formalizations
of intentional acts, beliefs, etc. (a sketch of such a formalization
follows this list).
2. We need to build systems that are adequate for increasing
domains of contact with people rather than trying to get at the "fundamental"
nature of the concepts at once. For example, a program that could
reason about what travel agents know and will do upon request and
how they might goof would require less knowledge than a program
that had to supervise children or advise judges and policemen.
(Boo! to Weizenbaum!).
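A sketch of the kind of formalization item 1 asks for, in the general
style of the McCarthy and Hayes situation calculus; the predicates
believes, wants, intends and achieves here are illustrative assumptions,
not a worked-out theory:

    \forall p\, a\, g\, s.\; believes(p, achieves(a, g), s) \land
        wants(p, g, s) \supset intends(p, a, s)

That is, if in situation s person p believes that action a achieves
goal g, and p wants g, then p intends a. The point is only that belief
takes a proposition as an argument and can appear in ordinary first
order axioms.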
******
"Why the law of effect will not go away"
The concept of inner environment is neat, but we have evolved beyond
that. Namely, we can model aspects of the environment including
non-temporal aspects.
Besides generate-and-test, there are divide-and-conquer and pattern
matching, which may be a more sophisticated version of
generate-and-test.
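A small sketch of the contrast, using an anagram task invented for
illustration: blind generate-and-test enumerates every candidate, while
the pattern-matching version uses the structure of the problem to
confine the candidates it examines.

    # Contrast of generate-and-test with a pattern-matching refinement.
    # The word list and the anagram task are illustrative assumptions.
    from itertools import permutations

    WORDS = {"stop", "pots", "opts", "spot"}

    def anagrams_by_generate_and_test(word):
        # Generate every permutation of the letters, test each one
        # for membership in the word list.
        return {"".join(p) for p in permutations(word)} & WORDS

    def anagrams_by_pattern_matching(word):
        # Match candidates against the letter-multiset pattern rather
        # than generating all the permutations.
        key = sorted(word)
        return {w for w in WORDS if sorted(w) == key}

Both calls return the same four words for "tops", but the second
examines only words that could possibly match.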
Suppose my computer solves the AI problem that has been puzzling me
and that I could never, in my lifetime, have solved. The key is "in
my lifetime"! Any computer program I write merely extends my
effective lifetime, perhaps by a factor of 1,000,000, but anything
my program can do, I could do if I had enough time. Dennett's random
element is unnecessary and useless.
******
"Intentional Systems"
1. I think one could imagine a system to which one would ascribe
beliefs, almost all of whose contentful beliefs were false, and
which almost always lied about its beliefs. Consider a paranoid
who wants to stay out of the asylum.
2. Virtus dormitiva could be a reasonable concept.
3. My criteria for belief do not at all require talking.
*******
"Artificial intelligence as philosophy and as psychology"
1. "Have you ever ridden an antelope?" Why is it easy?
2. p.17 Fish don't understand swimming.
The LISP interpreter doesn't understand LISP programs. It only obeys
them. Likewise the computer hardware doesn't understand machine
language programs. Both points are made clear by the fact that the CPU
internally deals with only one instruction and its data at a time,
and the LISP interpreter looks at one function at a time.
You might want to say that the machine "understands" instructions
(I wouldn't), but it doesn't even "understand" programs.
"self understanding representations" would be something quite
different; they don't exist but are possible.
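To make the one-expression-at-a-time point concrete, here is a minimal
sketch of a LISP-style evaluator; the tiny dialect (numbers, symbols
and applications only) is an assumption for illustration:

    # Minimal LISP-style evaluator: it dispatches on the single
    # expression in front of it and never surveys the whole program.
    def lisp_eval(expr, env):
        if isinstance(expr, (int, float)):   # a number denotes itself
            return expr
        if isinstance(expr, str):            # a symbol is looked up
            return env[expr]
        op, *args = expr                     # one application at a time
        return env[op](*[lisp_eval(a, env) for a in args])

    # lisp_eval(["plus", "x", ["times", 2, 3]],
    #           {"x": 1, "plus": lambda a, b: a + b,
    #            "times": lambda a, b: a * b})   yields 7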
You describe the "frame problem" approximately as in (McCarthy and
Hayes, "Some philosophical problems from the standpoint of artificial
intelligence"). Minsky swiped the word "frame" to mean something
quite different; Simon criticized him for that. Calling it Kant's
problem might solve that problem and also might magnify its perceived
importance. I am ambivalent about doing that.
*******
"Why you can't make a computer that feels pain."